    Where should livestock graze? Integrated modeling and optimization to guide grazing management in the Cañete basin, Peru

    Integrated watershed management allows decision-makers to balance competing objectives, for example agricultural production and the protection of water resources. Here, we developed a spatially explicit approach to support such management in the Cañete watershed, Peru. We modeled the effect of grazing management on three services – livestock production, erosion control, and baseflow provision – and used an optimization routine to simulate landscapes providing the highest level of services. Over the entire watershed, there was a trade-off between livestock productivity and hydrologic services, and we identified locations that minimized this trade-off for a given set of preferences. Given the knowledge gaps in ecohydrology and practical constraints not represented in the optimizer, we assessed the robustness of spatial recommendations, i.e., by revealing areas most often selected by the optimizer. We conclude with a discussion of the practical decisions involved in using optimization frameworks to inform watershed management programs, and of the research needed to better inform the design of such programs.
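    The trade-off and robustness analysis described above lends itself to a compact illustration. Below is a minimal, hypothetical sketch in Python: a weighted-sum optimizer is swept across preference weights, and the frequency with which each cell is selected serves as the robustness measure the abstract mentions. The data are random stand-ins, not the study's calibrated ecohydrological models.

        # Hypothetical sketch of a weighted-sum trade-off optimization over grid cells.
        import numpy as np

        rng = np.random.default_rng(0)
        n_cells = 1000                    # management units in the watershed
        livestock = rng.random(n_cells)   # per-cell livestock productivity if grazed
        hydro_cost = rng.random(n_cells)  # per-cell loss of hydrologic services if grazed
        budget = 200                      # number of cells that may be grazed

        def optimize(weight):
            """Select `budget` cells maximizing weight*livestock - (1-weight)*hydro_cost."""
            score = weight * livestock - (1.0 - weight) * hydro_cost
            return np.argsort(score)[-budget:]  # indices of the best-scoring cells

        # Sweep preference weights to trace the production/hydrology trade-off,
        # counting how often each cell is chosen across preference sets.
        selection_count = np.zeros(n_cells)
        for w in np.linspace(0.1, 0.9, 9):
            selection_count[optimize(w)] += 1

        robust = np.flatnonzero(selection_count >= 8)  # chosen under nearly all weights
        print(f"{robust.size} cells selected under at least 8 of 9 preference weights")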

    Transparent and feasible uncertainty assessment adds value to applied ecosystem services modeling

    We introduce a special issue that aims to simultaneously motivate interest in uncertainty assessment (UA) and reduce the barriers practitioners face in conducting it. The issue, “Demonstrating transparent, feasible, and useful uncertainty assessment in ecosystem services modeling,” responds to findings from a 2016 workshop of academics and practitioners that identified challenges and potential solutions to enhance the practice of uncertainty assessment in the ecosystem services (ES) community. Participants identified one important gap: the lack of a compelling set of cases showing that UA can be feasibly conducted at varying levels of sophistication, and that such assessment can usefully inform decision-relevant modeling conclusions. This article orients the reader to the 11 other articles that comprise the special issue, which span multiple methods and application domains, all with an explicit consideration of uncertainty. We highlight the value of UA demonstrated in the articles, including changing decisions, facilitating transparency, and clarifying the nature of evidence. We conclude by suggesting ways to promote further adoption of uncertainty analysis in ecosystem service assessments: easing the analytic workflows involved in UA while guarding against rote analyses, applying multiple models to the same problem, and learning about the conduct and value of UA from other disciplines.
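    As one concrete, low-barrier form of UA of the kind the issue advocates, the sketch below propagates input uncertainty through a toy model by Monte Carlo sampling, reporting an interval rather than a point estimate. The model and parameter ranges are invented placeholders, not drawn from any article in the issue.

        # Hypothetical Monte Carlo uncertainty propagation through a toy ES model.
        import numpy as np

        rng = np.random.default_rng(42)

        def es_model(rainfall_mm, retention_coef):
            """Toy ecosystem-service model: water retained by a landscape (mm/yr)."""
            return rainfall_mm * retention_coef

        # Treat uncertain inputs as distributions rather than point estimates.
        rainfall = rng.normal(loc=900.0, scale=120.0, size=10_000)   # mm/yr
        retention = rng.uniform(low=0.15, high=0.35, size=10_000)    # dimensionless

        outputs = es_model(rainfall, retention)
        lo, hi = np.percentile(outputs, [5, 95])
        print(f"Retained water: median {np.median(outputs):.0f} mm/yr, "
              f"90% interval [{lo:.0f}, {hi:.0f}]")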

    Complexity and scale in research on teaching effectiveness: Reflections from the MET study

    Researchers and policymakers in the US and beyond increasingly seek to identify teaching qualities that are associated with academic achievement gains for K-12 students through effectiveness studies. Yet teaching quality varies with academic content and social contexts, involves multiple participants, and requires a range of skills, knowledge, and dispositions. In this essay, we address the inescapable tension between complexity and scale in research on teaching effectiveness. We provide five recommendations to study designers and analysts to manage this tension and enhance effectiveness research, drawing on our recent experiences as the first external analysts of the Measures of Effective Teaching (MET) study. Our recommendations address conceptual framing, the measurement of teaching (e.g., observation protocols, student surveys), sampling, classroom video recording, and the use and interpretation of value-added models.

    Invisible Belfast: Flat ontologies and remediation of the post-conflict city

    [in]visible Belfast was a research-driven indie alternate reality game (ARG) that ran for six weeks during the spring of 2011 in Belfast and was subsequently adapted, five years later, into a fictional documentary for BBC Radio 4. The ARG is a participatory and dispersed narrative which the audience plays through: the text expands outward across both physical and digital platforms to create a mystery for the players using everyday platforms. The ARG is a product of media convergence and at its heart transmedial, defined by its complexity and modes of participation. The fictional radio documentary, which remediated the ARG into a simpler linear structure but possibly a more complex narrative form, retells parts of the story for new audiences. The premise of [in]visible Belfast – the game and later the documentary – is itself an adaptation of writer Ciaran Carson’s novel The Star Factory (1997): a postmodern adventure through the complex psychogeography of Belfast, a trail through the labyrinthine text which paints the history of Belfast in poetic prose. This article will map the concept’s journey from novel to game to radio, contextualising its development within its political and urban landscape and charting the remediation of the narratives as they fold out across multiple media and complex story arcs. The article will draw together ideas from the authors’ previous publications on ARGs, transmediality and complex textualities, and reflect on the textual trajectories that the remediation of the narrative has taken from the original book, through the ARG, into the radio documentary. Building upon recent approaches from environmental philosopher Tim Morton and games theorist Ian Bogost, the authors argue that Belfast’s history propels medial adaptations of a particular kind, characterised by a ‘flat’ ontology of space and time and a sort of diffuse and dark urban experience for designers/producers and players/listeners.

    Measurement of the cross-section and charge asymmetry of W bosons produced in proton-proton collisions at √s = 8 TeV with the ATLAS detector

    This paper presents measurements of the W⁺ → Ό⁺Μ and W⁻ → Ό⁻Μ cross-sections and the associated charge asymmetry as a function of the absolute pseudorapidity of the decay muon. The data were collected in proton-proton collisions at a centre-of-mass energy of 8 TeV with the ATLAS experiment at the LHC and correspond to a total integrated luminosity of 20.2 fb⁻¹. The precision of the cross-section measurements varies between 0.8% and 1.5% as a function of the pseudorapidity, excluding the 1.9% uncertainty on the integrated luminosity. The charge asymmetry is measured with an uncertainty between 0.002 and 0.003. The results are compared with predictions based on next-to-next-to-leading-order calculations with various parton distribution functions and have the sensitivity to discriminate between them. Comment: 38 pages in total, author list starting page 22, 5 figures, 4 tables, submitted to EPJC. All figures including auxiliary figures are available at https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/STDM-2017-13
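    For reference, the charge asymmetry reported above follows the standard definition as a function of the absolute pseudorapidity of the decay muon (the conventional definition, not quoted verbatim from the paper), in LaTeX form:

        A_\mu(|\eta_\mu|) = \frac{\mathrm{d}\sigma_{W^+}/\mathrm{d}|\eta_\mu| \; - \; \mathrm{d}\sigma_{W^-}/\mathrm{d}|\eta_\mu|}{\mathrm{d}\sigma_{W^+}/\mathrm{d}|\eta_\mu| \; + \; \mathrm{d}\sigma_{W^-}/\mathrm{d}|\eta_\mu|}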

    Search for chargino-neutralino production with mass splittings near the electroweak scale in three-lepton final states in √s=13 TeV pp collisions with the ATLAS detector

    A search for supersymmetry through the pair production of electroweakinos with mass splittings near the electroweak scale and decaying via on-shell W and Z bosons is presented for a three-lepton final state. The analyzed proton-proton collision data, taken at a center-of-mass energy of √s = 13 TeV, were collected between 2015 and 2018 by the ATLAS experiment at the Large Hadron Collider and correspond to an integrated luminosity of 139 fb⁻¹. A search emulating the recursive jigsaw reconstruction technique with easily reproducible laboratory-frame variables is performed. The two excesses observed in the recursive jigsaw analysis of the 2015–2016 data in the low-mass three-lepton phase space are reproduced. Results with the full data set are in agreement with the Standard Model expectations. They are interpreted to set exclusion limits at the 95% confidence level on simplified models of chargino-neutralino pair production for masses up to 345 GeV.

    Search for new phenomena in final states with an energetic jet and large missing transverse momentum in pp collisions at √s = 8 TeV with the ATLAS detector

    Results of a search for new phenomena in final states with an energetic jet and large missing transverse momentum are reported. The search uses 20.3 fb⁻¹ of √s = 8 TeV data collected in 2012 with the ATLAS detector at the LHC. Events are required to have at least one jet with p_T > 120 GeV and no leptons. Nine signal regions are considered with increasing missing transverse momentum requirements between E_T^miss > 150 GeV and E_T^miss > 700 GeV. Good agreement is observed between the number of events in data and Standard Model expectations. The results are translated into exclusion limits on models with either large extra spatial dimensions, pair production of weakly interacting dark matter candidates, or production of very light gravitinos in a gauge-mediated supersymmetric model. In addition, limits on the production of an invisibly decaying Higgs-like boson leading to similar topologies in the final state are presented.

    Seaview Survey Photo-quadrat and Image Classification Dataset

    The primary scientific dataset arising from the XL Catlin Seaview Survey project is the “Seaview Survey Photo-quadrat and Image Classification Dataset”, consisting of: (1) over one million standardised, downward-facing “photo-quadrat” images covering approximately 1 m² of the sea floor; (2) human-classified annotations that can be used to train and validate image classifiers; (3) benthic cover data arising from the application of machine learning classifiers to the photo-quadrats; and (4) the triplets of raw images (covering 360°) from which the photo-quadrats were derived. Photo-quadrats were collected between 2012 and 2018 at 860 transect locations around the world, including: the Caribbean and Bermuda, the Indian Ocean (Maldives, Chagos Archipelago), the Coral Triangle (Indonesia, Philippines, Timor-Leste, Solomon Islands), the Great Barrier Reef, Taiwan and Hawaii. For additional information regarding methodology, data structure, organization and size, please see the attached document “Dataset documentation”.
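    To illustrate how benthic cover data such as item (3) are typically derived from point-level classifier output, the sketch below tallies labelled points per transect into percent cover. The schema and class labels are invented for illustration; the dataset's actual structure is described in the attached documentation.

        # Hypothetical percent-cover computation from point annotations.
        import pandas as pd

        annotations = pd.DataFrame({
            "transect_id": ["T1"] * 5 + ["T2"] * 5,
            "label": ["hard_coral", "algae", "hard_coral", "sand", "hard_coral",
                      "algae", "algae", "sand", "hard_coral", "algae"],
        })

        # Percent cover per transect = share of classified points in each class.
        cover = (annotations.groupby("transect_id")["label"]
                 .value_counts(normalize=True)
                 .mul(100).round(1)
                 .unstack(fill_value=0.0))
        print(cover)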

    Neuroimaging-Based Classification of PTSD Using Data-Driven Computational Approaches: A Multisite Big Data Study from the ENIGMA-PGC PTSD Consortium

    BACKGROUND: Recent advances in data-driven computational approaches have been helpful in devising tools to objectively diagnose psychiatric disorders. However, current machine learning studies are limited to small homogeneous samples, and their differing methodologies and imaging collection protocols limit the ability to directly compare and generalize their results. Here we aimed to classify individuals with PTSD versus controls and to assess generalizability using large heterogeneous brain datasets from the ENIGMA-PGC PTSD Working Group.

    METHODS: We analyzed brain MRI data from 3,477 structural MRI (s-MRI), 2,495 resting-state fMRI (rs-fMRI), and 1,952 diffusion MRI (d-MRI) scans. First, we identified the brain features that best distinguish individuals with PTSD from controls using traditional machine learning methods. Second, we assessed the utility of the denoising variational autoencoder (DVAE) and evaluated its classification performance. Third, we assessed the generalizability and reproducibility of both models using a leave-one-site-out cross-validation procedure for each modality.

    RESULTS: We found lower performance in classifying PTSD vs. controls with data from over 20 sites (60% test AUC for s-MRI, 59% for rs-fMRI, and 56% for d-MRI) than reported by other studies run on single-site data. The performance increased when classifying PTSD against healthy controls without trauma history in each modality (75% AUC). The classification performance remained intact when applying the DVAE framework, which reduced the number of features. Finally, we found that the DVAE framework achieved better generalization to unseen datasets than the traditional machine learning frameworks, albeit with performance only slightly above chance.

    CONCLUSION: These results have the potential to provide a baseline classification performance for PTSD when using large-scale neuroimaging datasets. Our findings show that the choice of control group can heavily affect classification performance. The DVAE framework provided better generalizability for the multi-site data; this may be significant in clinical practice, since neuroimaging-based DVAE classification models are much less site-specific, rendering them more generalizable.
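    The leave-one-site-out procedure described in METHODS can be sketched with scikit-learn's LeaveOneGroupOut, using site IDs as groups: each fold trains on all sites but one and tests on the held-out site, estimating cross-site generalization. Data, features, and the classifier below are random placeholders, not the study's pipeline.

        # Hypothetical leave-one-site-out cross-validation sketch.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import LeaveOneGroupOut

        rng = np.random.default_rng(0)
        X = rng.normal(size=(600, 50))         # e.g., 50 imaging-derived features
        y = rng.integers(0, 2, size=600)       # PTSD vs. control labels
        sites = rng.integers(0, 20, size=600)  # acquisition site per subject

        aucs = []
        for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=sites):
            clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
            # Score on the held-out site to estimate cross-site generalization.
            aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))

        print(f"Mean leave-one-site-out AUC: {np.mean(aucs):.2f}")  # ~0.5 on random data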